
    Summarisation & Visualisation of Large Volumes of Time-Series Sensor Data

    With the increasing ubiquity of sensor data, presenting this data in a meaningful way to users is a challenge that must be addressed before we can easily deploy real-world sensor network interfaces in the home or workplace. In this paper, we will present one solution to the visualisation of large quantities of sensor data that is easy to understand and yet provides meaningful and intuitive information to a user, even when examining many weeks or months of historical data. We will illustrate this visualisation technique with two real-world deployments of sensing the person and sensing the home. The latter deployment uses a number of sensors, including an electricity usage sensor supplied by Episensor. This poses our second challenge: how to summarise an extended period of electricity usage data for a home user.
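
    A minimal sketch of the kind of summarisation this implies, assuming pandas and a single numeric sensor stream (e.g. watts from an electricity usage sensor); the column names and daily aggregates are illustrative and not the paper's actual technique:

        # Sketch: condensing months of raw sensor readings into daily summaries
        # suitable for an overview visualisation. Column names and aggregation
        # choices are illustrative assumptions.
        import pandas as pd

        def summarise_daily(readings: pd.DataFrame) -> pd.DataFrame:
            """readings has a 'timestamp' column and a numeric 'value' column."""
            readings = readings.set_index(pd.to_datetime(readings["timestamp"]))
            return readings["value"].resample("1D").agg(["mean", "min", "max", "sum"])

        if __name__ == "__main__":
            # Example: ten weeks of readings at one sample per minute
            idx = pd.date_range("2012-01-01", periods=60 * 24 * 70, freq="min")
            df = pd.DataFrame({"timestamp": idx, "value": 100.0})
            print(summarise_daily(df).head())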

    Mining user activity as a context source for search and retrieval

    Nowadays in information retrieval it is generally accepted that if we can better understand the context of users then this could help the search process, either at indexing time by including more metadata or at retrieval time by better modelling the user context. In this work we explore how activity recognition from tri-axial accelerometers can be employed to model a user's activity as a means of enabling context-aware information retrieval. In this paper we discuss how we can gather user activity automatically as a context source from a wearable mobile device and we evaluate the accuracy of our proposed user activity recognition algorithm. Our technique can recognise four kinds of activities, which can be used to model part of an individual's current context. We discuss promising experimental results, possible approaches to improve our algorithms, and the impact of this work in modelling user context toward enhanced search and retrieval.
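
    The abstract does not name the four activities or the features used, so the sketch below assumes common choices (windowed per-axis statistics, an SVM classifier, labels such as sitting, standing, walking and driving) purely for illustration:

        # Sketch: window-based activity recognition from tri-axial accelerometer data.
        # Window length, features and labels are illustrative assumptions, not the
        # paper's exact setup.
        import numpy as np
        from sklearn.svm import SVC

        WINDOW = 128  # samples per window (assumed)

        def window_features(xyz: np.ndarray) -> np.ndarray:
            """xyz: (WINDOW, 3) array of x/y/z acceleration for one window."""
            return np.concatenate([
                xyz.mean(axis=0),                            # mean per axis
                xyz.std(axis=0),                             # variability per axis
                np.abs(np.diff(xyz, axis=0)).mean(axis=0),   # jerkiness per axis
            ])

        def train(windows: list[np.ndarray], labels: list[str]) -> SVC:
            X = np.stack([window_features(w) for w in windows])
            return SVC(kernel="rbf").fit(X, labels)

        def predict(clf: SVC, window: np.ndarray) -> str:
            return clf.predict(window_features(window)[None, :])[0]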

    Automatically detecting important moments from everyday life using a mobile device

    This paper proposes a new method to detect important moments in our lives. Our work is motivated by the increase in the quantity of multimedia data, such as videos and photos, which capture life experiences into personal archives. Even though such media-rich data suggests visual processing to identify important moments, the oft-mentioned problem of the semantic gap means that users cannot automatically identify or retrieve important moments using visual processing techniques alone. Our approach utilises on-board sensors from mobile devices to automatically identify important moments as they are happening.

    A lifelogging system supporting multimodal access

    Today, technology has progressed to allow us to capture our lives digitally, such as by taking pictures, recording videos and using WiFi access to share experiences from smartphones. People’s lifestyles are changing; one example is the shift from traditional memo writing to the digital lifelog. Lifelogging is the process of using digital tools to collect personal data in order to illustrate the user’s daily life (Smith et al., 2011). The availability of smartphones embedded with different sensors such as a camera and GPS has encouraged the development of lifelogging. It has also brought new challenges in multi-sensor data collection, large-volume data storage, data analysis and appropriate representation of lifelog data across different devices. This study is designed to address these challenges. A lifelogging system was developed to collect, store, analyse and display data from multiple sensors, i.e. supporting multimodal access. In this system, the multi-sensor data (also called data streams) is first transmitted from the smartphone to the server, and only while the phone is charging. On the server side, six contexts are detected, namely personal, time, location, social, activity and environment. Events are then segmented and a related narrative is generated. Finally, lifelog data is presented differently on three widely used devices: the computer, smartphone and e-book reader. Lifelogging is likely to become a well-accepted technology in the coming years. Manual logging is not possible for most people and is not feasible in the long term, so automatic lifelogging is needed. This study presents a lifelogging system which can automatically collect multi-sensor data, detect contexts, segment events, generate meaningful narratives and display the appropriate data on different devices based on their unique characteristics. The work in this thesis therefore contributes to the development of automatic lifelogging and in doing so makes a valuable contribution to the field.
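
    A minimal sketch of the event segmentation step this pipeline implies; the six context fields are taken from the abstract, while the rule that a new event starts when location or activity changes is an assumption for illustration:

        # Sketch: segmenting a lifelog stream into events whenever the detected
        # context changes. The six context fields come from the abstract; the
        # "new event on location or activity change" rule is assumed.
        from dataclasses import dataclass

        @dataclass
        class Context:
            personal: str
            time: str
            location: str
            social: str
            activity: str
            environment: str

        def segment_events(contexts: list[Context]) -> list[list[Context]]:
            events, current = [], []
            for ctx in contexts:
                if current and (ctx.location != current[-1].location
                                or ctx.activity != current[-1].activity):
                    events.append(current)
                    current = []
                current.append(ctx)
            if current:
                events.append(current)
            return events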

    SenseSeer, a real‐time lifelogging tool

    Smartphones are becoming increasingly powerful and can act as computers in our pockets, with always-on network access and a suite of sensors to help them interact with the environment and the user. Because users tend to bring their mobile phone with them everywhere, the modern smartphone appears to be a key stepping stone towards the ‘Total Recall’ vision of Bell & Gemmell. In this work, we present real-time lifelogging software that can work independently or in conjunction with a SenseCam. The software is called Life-lens and it runs on Android smartphones. It utilizes energy conservation software on the smartphone to support day-long sensor capture and build a semantically rich life narrative. All available sensors on the phone (including camera, accelerometer, GPS, Bluetooth, etc.) are employed to capture the current user context. The main contribution of Life-lens beyond previously existing solutions is that it is real-time in nature. Data gathered by the Life-lens software is analyzed on the phone and uploaded dynamically as a life-stream to a central server, where additional semantic analysis is performed to refine the event segmentation and perform more processor-intensive operations such as face detection. The server implements a WWW-based real-time interface that can automatically annotate the life-stream to generate a diary-style narrative of life activity. In this work, we present Life-lens on the mobile device and on the WWW interface.
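
    Life-lens itself runs on Android, but the following language-agnostic sketch illustrates the dynamic life-stream upload idea: readings are batched on the device and posted to a server. The endpoint URL, payload layout, batch size and sampling interval are placeholders, not details of the actual system:

        # Sketch: batching sensor readings on the device and uploading them as a
        # life-stream. URL, payload format, batch size and interval are assumed.
        import json
        import time
        import urllib.request

        UPLOAD_URL = "https://example.org/lifestream/upload"  # placeholder
        BATCH_SIZE = 50

        def upload_batch(batch: list) -> None:
            body = json.dumps({"readings": batch}).encode("utf-8")
            req = urllib.request.Request(UPLOAD_URL, data=body,
                                         headers={"Content-Type": "application/json"})
            urllib.request.urlopen(req)

        def stream(read_sensors) -> None:
            """read_sensors() returns one dict of current sensor values."""
            batch = []
            while True:
                batch.append(read_sensors())
                if len(batch) >= BATCH_SIZE:
                    upload_batch(batch)
                    batch = []
                time.sleep(1.0)  # sampling interval (assumed)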

    Evaluating access mechanisms for multimodal representations of lifelogs

    Lifelogging, the automatic and ambient capture of daily life activities into a digital archive called a lifelog, is an increasingly popular activity with a wide range of application areas including medical (memory support), behavioural science (analysis of quality of life), work-related (auto-recording of tasks) and more. In this paper we focus on lifelogging where there is sometimes a need to re-find something from one’s past, recent or distant, from the lifelog. To be effective, a lifelog should be accessible across a variety of access devices. In the work reported here we create eight lifelogging interfaces and evaluate their effectiveness on three access devices (laptop, smartphone and e-book reader) for a search task. Based on tests with 16 users, we identify which of the eight interfaces are most effective for each access device in a known-item search task through the lifelog, for both the lifelog owner and for other searchers. Our results are important in suggesting ways in which personal lifelogs can be most effectively used and accessed.

    ZhiWo: Activity tagging and recognizing system from personal lifelogs

    With the increasing use of mobile devices as personal recording, communication and sensing tools, extracting the semantics of life activities through sensed data (photos, accelerometer, GPS etc.) is gaining widespread public awareness. A person who engages in long-term personal sensing is engaging in a process of lifelogging. Lifelogging typically involves using a range of (wearable) sensors to capture raw data, to segment into discrete activities, to annotate and subsequently to make accessible by search or browsing tools. In this paper, we present an intuitive lifelog activity recording and management system called ZhiWo. By using a supervised machine learning approach, sensed data collected by mobile devices are automatically classified into different types of daily human activities and these activities are interpreted as life activity retrieval units for personal archives.
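
    The abstract does not specify how classified data becomes retrieval units; the sketch below assumes the simplest possible rule, merging consecutive time windows that share a predicted activity label, purely for illustration:

        # Sketch: turning a sequence of per-window activity labels into contiguous
        # activity segments that can serve as retrieval units. The grouping rule
        # (merge consecutive identical labels) is an assumption.
        from itertools import groupby

        def to_retrieval_units(labels: list, window_seconds: int = 30) -> list:
            """labels[i] is the activity predicted for the i-th time window."""
            units, start = [], 0
            for activity, run in groupby(labels):
                length = len(list(run))
                units.append({
                    "activity": activity,
                    "start_s": start * window_seconds,
                    "end_s": (start + length) * window_seconds,
                })
                start += length
            return units

        # e.g. to_retrieval_units(["walking", "walking", "driving", "driving", "sitting"])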

    Green multimedia: informing people of their carbon footprint through two simple sensors

    In this work we discuss a new, but highly relevant, topic for the multimedia community: systems to inform individuals of their carbon footprint, which could ultimately effect change in community carbon footprint-related activities. The reduction of carbon emissions is now an important policy driver of many governments, and one of the major areas of focus is reducing the energy demand from consumers, i.e. all of us individually. In terms of CO2 generated from energy consumption, there are three predominant factors, namely electricity usage, thermal-related costs, and transport usage. Standard home electricity and heating sensors can be used to measure the former two aspects, and in this paper we evaluate a novel technique to estimate an individual's transport-related carbon emissions through the use of a simple wearable accelerometer. We investigate how providing this novel estimation of transport-related carbon emissions through an interactive web site and mobile phone app engages a set of users in becoming more aware of their carbon emissions. Our evaluations involve a group of 6 users collecting 25 million accelerometer readings and 12.5 million power readings, compared against a control group of 16 users collecting 29.7 million power readings.
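
    The paper's estimation method is not detailed in the abstract; the sketch below shows one plausible conversion from detected driving time to CO2, with an assumed average speed and emission factor that are not values from the paper:

        # Sketch: estimating transport-related CO2 from time classified as driving.
        # The average speed and per-kilometre emission factor are assumptions.
        AVERAGE_SPEED_KMH = 40.0           # assumed urban driving speed
        EMISSION_FACTOR_KG_PER_KM = 0.18   # assumed for an average passenger car

        def driving_co2_kg(driving_minutes: float) -> float:
            distance_km = (driving_minutes / 60.0) * AVERAGE_SPEED_KMH
            return distance_km * EMISSION_FACTOR_KG_PER_KM

        # e.g. 45 minutes of detected driving, under these assumptions:
        print(round(driving_co2_kg(45), 2))  # 5.4 kg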

    From lifelog to diary: a timeline view for memory reminiscence

    As digital recording sensors and lifelogging devices become more prevalent, the suitability of lifelogging tools to act as a reminiscence supporting tool has become an important research challenge. This paper aims to describe a first-generation memory reminiscence tool that utilises lifelogging sensors to record a digital diary of user activities and presents it as a narrative description of user activities. The automatically recognised daily activities are shown chronologically in the timeline view.
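
    A minimal sketch of a chronological timeline rendering of recognised activities; the event fields and the diary format are invented for illustration and are not the tool's actual output:

        # Sketch: rendering automatically recognised activities as a chronological,
        # diary-style timeline for one day. Event structure and format are assumed.
        from datetime import datetime

        def timeline(day_events: list) -> str:
            """Each event: {'start': datetime, 'end': datetime, 'activity': str}."""
            lines = []
            for ev in sorted(day_events, key=lambda e: e["start"]):
                lines.append("{} - {}  {}".format(
                    ev["start"].strftime("%H:%M"),
                    ev["end"].strftime("%H:%M"),
                    ev["activity"]))
            return "\n".join(lines)

        example = [
            {"start": datetime(2013, 5, 1, 8, 0), "end": datetime(2013, 5, 1, 8, 30),
             "activity": "walking to work"},
            {"start": datetime(2013, 5, 1, 8, 30), "end": datetime(2013, 5, 1, 12, 0),
             "activity": "working at desk"},
        ]
        print(timeline(example))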

    Turning raw SenseCam accelerometer data into meaningful user activities

    The onboard accelerometer is one of the most important sensors in the SenseCam, where it can influence the quality of photos captured by choosing the optimal time to take pictures. Compared with other sensors, the accelerometer has a number of advantages:
    - Acceleration data is easy to store and process, especially in comparison to the average of 4,000 images taken every day by the SenseCam, which consume a considerable amount of disk space. Acceleration data takes little space and can be processed by the SenseCam’s on-board micro-processor in real time.
    - The most important information from the SenseCam is image data, but it is difficult to take a clear photo when the user is moving very fast or is in a dark place; the accelerometer helps to avoid the problem of blurred image capture.
    - No wireless signals are needed. While GPS techniques have improved a lot in the past decade, determining location inside buildings is limited by the lack of a clear line of sight to satellites.
    - Low battery consumption. Compared with the camera and GPS, the accelerometer uses little battery. Battery life is currently the key bottleneck for portable sensors, and it is very inconvenient for users to have to remember to charge the battery all the time.
    The accelerometer can also act as an important source of evidence for automated content annotation, as described below. Given the above benefits of the accelerometer onboard the SenseCam, we now discuss the information which can be mined by analysing this raw acceleration data:
    1. Activity detection: By analysing acceleration data, common daily activities can be recognised, such as sitting, walking, driving and lying. To recognise each activity we employ a separate binary-class SVM model, because different features from the acceleration data are used to recognise different activities. These activity recognition results can be used as an important resource for associating context with real-time lifelog information.
    2. Calculating driving-related CO2: Environmental issues have been at the forefront of the public conscience of late. We believe it will be helpful if users can get real-time driving information and see how much carbon they have produced by driving. To achieve this we have built SVM classifiers to identify driving-related activity. Our driving detection can be improved by smoothing algorithms and also by techniques to detect time spent at traffic lights. From this we can make an accurate estimation of driving-related CO2 emissions.
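
    The abstract mentions that driving detection can be improved by smoothing; the sketch below shows one generic smoothing approach, a sliding-window majority vote over per-window predictions, with an arbitrary window size. It is not necessarily the authors' algorithm:

        # Sketch: majority-vote smoothing over per-window activity predictions,
        # the kind of post-processing mentioned for improving driving detection.
        # The window size of 5 is an arbitrary illustrative choice.
        from collections import Counter

        def smooth(predictions: list, k: int = 5) -> list:
            """Replace each label with the majority label among its k neighbours."""
            half = k // 2
            smoothed = []
            for i in range(len(predictions)):
                window = predictions[max(0, i - half): i + half + 1]
                smoothed.append(Counter(window).most_common(1)[0][0])
            return smoothed

        # e.g. smooth(["driving", "walking", "driving", "driving", "driving"])
        # -> the isolated "walking" window is relabelled as "driving"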